5 research outputs found
Texture Mixer: A Network for Controllable Synthesis and Interpolation of Texture
This paper addresses the problem of interpolating visual textures. We
formulate this problem by requiring (1) by-example controllability and (2)
realistic and smooth interpolation among an arbitrary number of texture
samples. To solve it, we propose a neural network trained simultaneously on a
reconstruction task and a generation task, which can project texture examples
onto a latent space where they can be linearly interpolated and projected back
onto the image domain, thus ensuring both intuitive control and realistic
results. We show that our method outperforms a number of baselines according to a
comprehensive suite of metrics as well as a user study. We further show several
applications based on our technique, which include texture brush, texture
dissolve, and animal hybridization. Comment: Accepted to CVPR'19
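A minimal sketch of the latent-space interpolation idea the abstract describes, assuming hypothetical `encoder` and `decoder` networks standing in for the paper's learned projections (the actual architecture and training objectives differ in detail):

```python
import torch

def interpolate_textures(encoder, decoder, tex_a, tex_b, steps=8):
    """Project two texture examples into latent space, linearly
    interpolate between their codes, and decode each interpolant
    back to the image domain. `encoder` and `decoder` are
    hypothetical stand-ins for the paper's learned networks."""
    with torch.no_grad():
        z_a = encoder(tex_a)  # latent code of texture A
        z_b = encoder(tex_b)  # latent code of texture B
        frames = []
        for t in torch.linspace(0.0, 1.0, steps):
            z = (1 - t) * z_a + t * z_b   # linear blend in latent space
            frames.append(decoder(z))     # project back to image space
    return frames
```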
SimpSON: Simplifying Photo Cleanup with Single-Click Distracting Object Segmentation Network
In photo editing, it is common practice to remove visual distractions to
improve the overall image quality and highlight the primary subject. However,
manually selecting and removing these small and dense distracting regions can
be a laborious and time-consuming task. In this paper, we propose an
interactive distractor selection method that is optimized to achieve the task
with just a single click. Our method surpasses the precision and recall
achieved by the traditional method of running panoptic segmentation and then
selecting the segments containing the clicks. We also showcase how a
transformer-based module can be used to identify more distracting regions
similar to the user's click position. Our experiments demonstrate that the
model can effectively and accurately segment unknown distracting objects
interactively and in groups. By significantly simplifying the photo cleaning
and retouching process, our proposed model provides inspiration for exploring
rare object segmentation and group selection with a single click. Comment: CVPR 2023. Project link: https://simpson-cvpr23.github.io
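For context, a minimal sketch of the baseline the abstract compares against: run panoptic segmentation once, then return whichever segment contains the user's click. The function and argument names are illustrative, not from the paper.

```python
import numpy as np

def select_segment_by_click(panoptic_mask: np.ndarray, click_xy):
    """Baseline selection scheme: given a precomputed panoptic
    segmentation (an H x W array of segment ids), return a binary
    mask for the segment under the user's click. SimpSON's
    click-optimized model is reported to surpass this baseline in
    both precision and recall."""
    x, y = click_xy
    segment_id = panoptic_mask[y, x]    # id of the clicked segment
    return panoptic_mask == segment_id  # boolean mask of that segment
```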
CM-GAN: Image Inpainting with Cascaded Modulation GAN and Object-Aware Training
Recent image inpainting methods have made great progress but often struggle
to generate plausible image structures when dealing with large holes in complex
images. This is partially due to the lack of effective network structures that
can capture both the long-range dependency and high-level semantics of an
image. To address these problems, we propose cascaded modulation GAN (CM-GAN),
a new network design consisting of an encoder with Fourier convolution blocks
that extract multi-scale feature representations from the input image with
holes, and a StyleGAN-like decoder with a novel cascaded global-spatial
modulation block at each scale level. In each decoder block, global modulation
is first applied to perform coarse semantic-aware structure synthesis, then
spatial modulation is applied on the output of global modulation to further
adjust the feature map in a spatially adaptive fashion. In addition, we design
an object-aware training scheme to prevent the network from hallucinating new
objects inside holes, fulfilling the needs of object removal tasks in
real-world scenarios. Extensive experiments are conducted to show that our
method significantly outperforms existing methods in both quantitative and
qualitative evaluation. Comment: 32 pages, 18 figures
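A simplified sketch of the cascaded global-spatial modulation idea in a single decoder block, with assumed layer shapes (the paper's exact block design, normalization, and upsampling details differ):

```python
import torch
import torch.nn as nn

class CascadedModulationBlock(nn.Module):
    """Illustrative decoder block: a global (style-like) modulation
    first produces coarse, semantics-aware features, then a spatially
    adaptive modulation refines that output per pixel. Shapes and
    layer choices here are assumptions, not the paper's design."""
    def __init__(self, channels, style_dim):
        super().__init__()
        self.to_scale = nn.Linear(style_dim, channels)  # global per-channel scale
        self.to_shift = nn.Linear(style_dim, channels)  # global per-channel shift
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.spatial = nn.Conv2d(channels, 2 * channels, 3, padding=1)

    def forward(self, feat, global_code):
        # Stage 1: global modulation for coarse structure synthesis.
        scale = self.to_scale(global_code)[:, :, None, None]
        shift = self.to_shift(global_code)[:, :, None, None]
        coarse = self.conv(feat * (1 + scale) + shift)
        # Stage 2: spatial modulation refines the coarse output with
        # per-pixel scale/shift predicted from the features themselves.
        gamma, beta = self.spatial(coarse).chunk(2, dim=1)
        return coarse * (1 + gamma) + beta
```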
Structure-Guided Image Completion with Image-level and Object-level Semantic Discriminators
Structure-guided image completion aims to inpaint a local region of an image
according to an input guidance map from users. While such a task enables many
practical applications for interactive editing, existing methods often struggle
to hallucinate realistic object instances in complex natural scenes. Such a
limitation is partially due to the lack of semantic-level constraints inside
the hole region as well as the lack of a mechanism to enforce realistic object
generation. In this work, we propose a learning paradigm that consists of
semantic discriminators and object-level discriminators for improving the
generation of complex semantics and objects. Specifically, the semantic
discriminators leverage pretrained visual features to improve the realism of
the generated visual concepts. Moreover, the object-level discriminators take
aligned instances as inputs to enforce the realism of individual objects. Our
proposed scheme significantly improves the generation quality and achieves
state-of-the-art results on various tasks, including segmentation-guided
completion, edge-guided manipulation, and panoptically-guided manipulation on
the Places2 dataset. Furthermore, our trained model is flexible and can support
multiple editing use cases, such as object insertion, replacement, removal, and
standard inpainting. In particular, our trained model combined with a novel
automatic image completion pipeline achieves state-of-the-art results on the
standard inpainting task. Comment: 18 pages, 16 figures
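A rough sketch of how the two discriminator losses might be combined during a discriminator update, using a non-saturating (softplus) GAN loss as one plausible choice; all module names (`sem_disc`, `obj_disc`, `feat_extractor`) are hypothetical stand-ins, and the paper's actual loss formulation may differ:

```python
import torch
import torch.nn.functional as F

def discriminator_losses(sem_disc, obj_disc, feat_extractor,
                         real_img, fake_img, real_crops, fake_crops):
    """Sketch of the two-discriminator scheme: a semantic discriminator
    scores images in a frozen pretrained feature space, while an
    object-level discriminator scores aligned instance crops. The
    softplus GAN loss is an assumed choice, not confirmed by the paper."""
    with torch.no_grad():  # pretrained features are kept frozen
        f_real = feat_extractor(real_img)
        f_fake = feat_extractor(fake_img)
    sem_loss = (F.softplus(-sem_disc(f_real)).mean()
                + F.softplus(sem_disc(f_fake)).mean())
    # Object-level discriminator sees cropped, aligned instances.
    obj_loss = (F.softplus(-obj_disc(real_crops)).mean()
                + F.softplus(obj_disc(fake_crops)).mean())
    return sem_loss, obj_loss
```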
GeoFill: Reference-Based Image Inpainting of Scenes with Complex Geometry
Reference-guided image inpainting restores image pixels by leveraging the
content from another reference image. The previous state-of-the-art, TransFill,
warps the source image with multiple homographies and fuses the warped results
for hole filling. Inspired by structure-from-motion pipelines and recent progress
in monocular depth estimation, we propose a more principled approach that does
not require heuristic planar assumptions. We leverage a monocular depth
estimate, predict the relative pose between the cameras, and align the
reference image to the target via differentiable 3D reprojection, jointly
optimizing the relative pose and the scale and offset of the depth map. Our approach
achieves state-of-the-art performance on both the RealEstate10K and
MannequinChallenge datasets, which feature large baselines, complex geometry,
and extreme camera motions. We experimentally verify that our approach is also
better at
handling large holes. Comment: 17 pages, 11 figures
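A minimal sketch of the depth-based reprojection step in the spirit of the description above: lift target pixels to 3D with a calibrated monocular depth map, transform by the relative pose, project into the reference view, and sample it. All variable names are illustrative; `scale` and `offset` play the role of the jointly optimized depth calibration parameters.

```python
import torch
import torch.nn.functional as F

def reproject_reference(ref_img, depth, K, R, t, scale, offset):
    """Warp `ref_img` into the target view using the target's monocular
    depth. Assumed shapes: ref_img (B,C,H,W); depth (B,1,H,W) for the
    target view; K (3,3) shared intrinsics; R (B,3,3) and t (B,3) the
    target-to-reference pose. Illustrative, not the paper's code."""
    B, _, H, W = ref_img.shape
    d = scale * depth + offset  # calibrated depth (optimized scale/offset)
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float()  # (3,H,W)
    rays = torch.linalg.inv(K) @ pix.reshape(3, -1)   # back-projected rays
    pts = d.reshape(B, 1, -1) * rays                  # 3D points, target frame
    pts = R @ pts + t.reshape(B, 3, 1)                # move to reference frame
    proj = K @ pts                                    # project to ref pixels
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    u = uv[:, 0] / (W - 1) * 2 - 1                    # normalize for grid_sample
    v = uv[:, 1] / (H - 1) * 2 - 1
    grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(ref_img, grid, align_corners=True)
```

Because every step here is differentiable, gradients can flow back into the pose and the depth scale/offset, matching the joint optimization the abstract describes.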